710 research outputs found

    The Computational Complexity of Quantum Determinants

    Full text link
    In this work, we study the computational complexity of quantum determinants, a $q$-deformation of matrix permanents: given a complex number $q$ on the unit circle in the complex plane and an $n \times n$ matrix $X$, the $q$-permanent of $X$ is defined as $\mathrm{Per}_q(X) = \sum_{\sigma \in S_n} q^{\ell(\sigma)} X_{1,\sigma(1)} \cdots X_{n,\sigma(n)}$, where $\ell(\sigma)$ is the inversion number of the permutation $\sigma$ in the symmetric group $S_n$ on $n$ elements. This function family generalizes the determinant and the permanent, which correspond to the cases $q = -1$ and $q = 1$, respectively. For worst-case hardness, using Liouville's approximation theorem and facts from algebraic number theory, we show that for a primitive $m$-th root of unity $q$ with odd prime power $m = p^k$, exactly computing the $q$-permanent is $\mathsf{Mod}_p\mathsf{P}$-hard. This implies that an efficient algorithm for computing the $q$-permanent would result in a collapse of the polynomial hierarchy. Next, we show that the $q$-permanent can be computed exactly using an oracle that approximates it to within a polynomial multiplicative error and a membership oracle for a finite set of algebraic integers. It follows that an efficient approximation algorithm would also imply a collapse of the polynomial hierarchy. By random self-reducibility, computing the $q$-permanent remains hard for a wide range of distributions satisfying a property called the strong autocorrelation property. Specifically, this is proved via a reduction from the $1$-permanent to the $q$-permanent for $O(1/n^2)$ points $z$ on the unit circle. Since the family of permanent functions shares a common algebraic structure, various techniques developed for the hardness of the permanent can be generalized to $q$-permanents.
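
    The defining sum can be evaluated directly, if inefficiently, by iterating over all $n!$ permutations. The sketch below (plain Python with itertools) is only a minimal illustration of the definition above, not of the paper's hardness reductions; the sample matrix and the values of q used are arbitrary.

        from itertools import permutations

        def inversions(sigma):
            """Number of pairs (i, j) with i < j and sigma[i] > sigma[j]."""
            return sum(1 for i in range(len(sigma))
                         for j in range(i + 1, len(sigma))
                         if sigma[i] > sigma[j])

        def q_permanent(X, q):
            """Brute-force Per_q(X) = sum_sigma q^{l(sigma)} * prod_i X[i][sigma(i)].

            Exponential time; intended only to illustrate the definition."""
            n = len(X)
            total = 0
            for sigma in permutations(range(n)):
                term = q ** inversions(sigma)
                for i in range(n):
                    term *= X[i][sigma[i]]
                total += term
            return total

        # q = 1 recovers the permanent and q = -1 the determinant.
        X = [[1, 2], [3, 4]]
        print(q_permanent(X, 1))   # permanent: 1*4 + 2*3 = 10
        print(q_permanent(X, -1))  # determinant: 1*4 - 2*3 = -2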

    Optimal quantum algorithm for polynomial interpolation

    Get PDF
    We consider the number of quantum queries required to determine the coefficients of a degree-d polynomial over GF(q). A lower bound shown independently by Kane and Kutin and by Meyer and Pommersheim shows that d/2+1/2 quantum queries are needed to solve this problem with bounded error, whereas an algorithm of Boneh and Zhandry shows that d quantum queries are sufficient. We show that the lower bound is achievable: d/2+1/2 quantum queries suffice to determine the polynomial with bounded error. Furthermore, we show that d/2+1 queries suffice to achieve success probability approaching 1 for large q. These upper bounds improve results of Boneh and Zhandry on the insecurity of cryptographic protocols against quantum attacks. We also show that our algorithm's success probability as a function of the number of queries is precisely optimal. Furthermore, the algorithm can be implemented with gate complexity poly(log q) with negligible decrease in the success probability. We end with a conjecture about the quantum query complexity of multivariate polynomial interpolation. Comment: 17 pages; minor improvements; added a conjecture about multivariate interpolation.
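
    For context, the classical baseline needs d+1 evaluation queries: a degree-d polynomial over GF(q) is uniquely determined by its values at d+1 distinct points, e.g. via Lagrange interpolation, and the quantum results above roughly halve that query count. The sketch below is a standard classical Lagrange interpolation over GF(q), written for prime q so that inverses can be taken via Fermat's little theorem; it is not the quantum algorithm of the paper.

        def interpolate_gf(points, q):
            """Lagrange interpolation over GF(q), q prime.

            points: d+1 pairs (x_i, y_i) with distinct x_i.
            Returns coefficients [c_0, ..., c_d] (lowest degree first) of the
            unique polynomial of degree at most d through the points."""
            d = len(points) - 1
            coeffs = [0] * (d + 1)
            for i, (xi, yi) in enumerate(points):
                basis = [1]   # i-th Lagrange numerator, as a coefficient list
                denom = 1
                for j, (xj, _) in enumerate(points):
                    if j == i:
                        continue
                    # Multiply the running basis polynomial by (x - xj).
                    new_basis = [0] * (len(basis) + 1)
                    for k, c in enumerate(basis):
                        new_basis[k] = (new_basis[k] - xj * c) % q
                        new_basis[k + 1] = (new_basis[k + 1] + c) % q
                    basis = new_basis
                    denom = (denom * (xi - xj)) % q
                scale = (yi * pow(denom, q - 2, q)) % q   # modular inverse of denom
                for k, c in enumerate(basis):
                    coeffs[k] = (coeffs[k] + scale * c) % q
            return coeffs

        # Recover f(x) = 3 + 2x over GF(7) from its values at x = 0 and x = 1.
        print(interpolate_gf([(0, 3), (1, 5)], 7))   # [3, 2]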

    Applications and verification of quantum computers

    Get PDF
    Quantum computing devices can solve problems that are infeasible for classical computers. While rigorously proving speedups over existing classical algorithms demonstrates the usefulness of quantum computers, analyzing the limits on efficient processes for computational tasks allows us to better understand the power of quantum computation. Indeed, problems that are hard even for quantum computers enable useful cryptographic applications. In this dissertation, we aim to understand the limits of efficient quantum computation and to base applications on problems that are hard for quantum computers. We consider models in which a classical machine can leverage the power of a quantum device, which may be affected by noise or behave adversarially. We present protocols and tools for detecting errors in a quantum machine and for estimating how serious the deviation is. We construct a non-interactive protocol that enables a purely classical party to delegate any quantum computation to an untrusted quantum prover. In the setting of error-prone quantum hardware, we employ formal methods to construct a logical system for reasoning about the robustness of a quantum algorithm design. We also study the limits of ideal quantum computers for computational tasks and give asymptotically optimal algorithms. In particular, we give quantum algorithms that provide speedups for the polynomial interpolation problem and show their optimality. Finally, we study the performance of quantum algorithms that learn properties of a matrix using queries that return its action on an input vector. In particular, we show that for various linear algebra problems there is no quantum speedup, while for some problems exponential speedups can be achieved.

    Quantitative Robustness Analysis of Quantum Programs (Extended Version)

    Full text link
    Quantum computation is a topic of significant recent interest, with practical advances coming from both research and industry. A major challenge in quantum programming is dealing with errors (quantum noise) during execution. Because quantum resources (e.g., qubits) are scarce, classical error correction techniques applied at the level of the architecture are currently cost-prohibitive. But while this reality means that quantum programs are almost certain to have errors, no principled means yet exists to reason about erroneous behavior. This paper attempts to fill this gap by developing a semantics for erroneous quantum while-programs, as well as a logic for reasoning about them. This logic permits proving a property we have identified, called $\epsilon$-robustness, which characterizes the possible "distance" between an ideal program and an erroneous one. We have proved the logic sound and shown its utility on several case studies, notably: (1) analyzing the robustness of noisy versions of the quantum Bernoulli factory (QBF) and quantum walk (QW); (2) demonstrating the (in)effectiveness of different error correction schemes on single-qubit errors; and (3) analyzing the robustness of a fault-tolerant version of QBF. Comment: 34 pages, LaTeX; v2: fixed a typo.
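
    One common way to make such a "distance" concrete is the trace distance between the density matrices produced by the ideal program and the noisy one; the paper defines its own formal notion of $\epsilon$-robustness, so the numpy sketch below is only an illustrative stand-in, using a hypothetical single-qubit phase-flip error with an assumed probability p.

        import numpy as np

        def trace_distance(rho, sigma):
            """Trace distance (1/2) * ||rho - sigma||_1 between density matrices."""
            singular_values = np.linalg.svd(rho - sigma, compute_uv=False)
            return 0.5 * np.sum(singular_values)

        # Ideal output: |+><+|, prepared by a Hadamard gate on |0>.
        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
        ket0 = np.array([[1.0], [0.0]])
        ideal = H @ ket0 @ ket0.conj().T @ H.conj().T

        # Erroneous output: the same state after a phase-flip (Z) error that
        # occurs with probability p (an assumed toy noise model).
        p = 0.05
        Z = np.array([[1, 0], [0, -1]])
        noisy = (1 - p) * ideal + p * (Z @ ideal @ Z)

        print(trace_distance(ideal, noisy))   # ~0.05, growing linearly with p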

    Improving gelation efficiency and cytocompatibility of visible light polymerized thiol-norbornene hydrogels via addition of soluble tyrosine

    Get PDF
    Hydrogels immobilized with biomimetic peptides have been used widely for tissue engineering and drug delivery applications. Photopolymerization has been among the most commonly used techniques to fabricate peptide-immobilized hydrogels, as it offers rapid and robust peptide immobilization within a crosslinked hydrogel network. Both chain-growth and step-growth photopolymerizations can be used to immobilize peptides within covalently crosslinked hydrogels. A previously developed visible light mediated step-growth thiol-norbornene gelation scheme has demonstrated efficient crosslinking of hydrogels composed of an inert poly(ethylene glycol)-norbornene (PEGNB) macromer and a small molecular weight bis-thiol linker, such as dithiothreitol (DTT). Compared with conventional visible light mediated chain-polymerizations, where multiple initiator components are required, step-growth photopolymerized thiol-norbornene hydrogels are more cytocompatible for the in situ encapsulation of radical-sensitive cells (e.g., pancreatic β-cells). This contribution explored visible light based crosslinking of various bis-cysteine containing peptides with the macromer 8-arm PEGNB to form biomimetic hydrogels suitable for in situ cell encapsulation. It was found that the addition of soluble tyrosine during polymerization not only significantly accelerated gelation, but also improved the crosslinking efficiency of PEG-peptide hydrogels, as evidenced by a decreased gel point and enhanced gel modulus. In addition, soluble tyrosine drastically enhanced the cytocompatibility of the resulting PEG-peptide hydrogels, as demonstrated by in situ encapsulation and culture of pancreatic MIN6 β-cells. This visible light based thiol-norbornene crosslinking mechanism provides an attractive gelation method for preparing cytocompatible PEG-peptide hydrogels for tissue engineering applications.

    Chapter 37 Introduction to Section 6

    Get PDF
    This handbook addresses a growing list of challenges faced by regions and cities in the Pacific Rim, drawing connections around the what, why, and how questions that are fundamental to sustainable development policies and planning practices. These include the connection between cities and surrounding landscapes, across different boundaries and scales; the persistence of environmental and development inequities; and the growing impacts of global climate change, including how physical conditions and social implications are being anticipated and addressed. Building upon localized knowledge and contextualized experiences, this edited collection brings attention to place-based approaches across the Pacific Rim and makes an important contribution to the scholarly and practical understanding of sustainable urban development models that have mostly emerged out of Western experiences. Nine sections, each grounded in research, dialogue, and collaboration with practical examples and analysis, focus on a theme or dimension that carries critical impacts on a holistic vision of city-landscape development, such as resilient communities, ecosystem services and biodiversity, energy, water, health, and planning and engagement. This international edited collection will appeal to academics and students engaged in research involving landscape architecture, architecture, planning, public policy, law, urban studies, geography, environmental science, and area studies. It also informs policy makers, professionals, and advocates of actionable knowledge and adoptable ideas by connecting those issues with the Sustainable Development Goals (SDGs) of the United Nations. The collection of writings presented in this book speaks to the multiyear collaboration of scholars through the APRU Sustainable Cities and Landscapes (SCL) Program and its global network, facilitated by SCL Annual Conferences and involving more than 100 contributors from more than 30 institutions.

    Quantum Query Complexity with Matrix-Vector Products

    Get PDF
    We study quantum algorithms that learn properties of a matrix using queries that return its action on an input vector. We show that for various problems, including computing the trace, determinant, or rank of a matrix or solving a linear system that it specifies, quantum computers do not provide an asymptotic speedup over classical computation. On the other hand, we show that for some problems, such as computing the parities of rows or columns or deciding if there are two identical rows or columns, quantum computers provide an exponential speedup. We demonstrate this by showing the equivalence of models that provide matrix-vector products, vector-matrix products, and vector-matrix-vector products, whereas the power of these models can vary significantly in the classical setting.
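
    The access model itself is easy to make concrete: the algorithm never sees the matrix entries directly, only the result of multiplying a chosen vector. The toy numpy sketch below wraps an unknown matrix in a query-counting oracle and recovers its trace with n matrix-vector queries against standard basis vectors; it illustrates only the query model, not the quantum algorithms or lower bounds of the paper, and the class and function names are ours.

        import numpy as np

        class MatVecOracle:
            """Black-box access to an unknown matrix M via matrix-vector products only."""
            def __init__(self, M):
                self._M = np.asarray(M)
                self.queries = 0

            def apply(self, v):
                self.queries += 1
                return self._M @ v

        def trace_via_queries(oracle, n):
            """Recover tr(M) with n matrix-vector queries: querying the i-th standard
            basis vector returns the i-th column, whose i-th entry is M[i, i]."""
            total = 0.0
            for i in range(n):
                e = np.zeros(n)
                e[i] = 1.0
                total += oracle.apply(e)[i]
            return total

        M = np.array([[2.0, 1.0], [0.0, 3.0]])
        oracle = MatVecOracle(M)
        print(trace_via_queries(oracle, 2), oracle.queries)   # 5.0 2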

    Memory effect on the multiphoton coherent destruction of tunneling in the electron transport of nanoscale systems driven by a periodic field: A generalized Floquet approach

    Get PDF
    This is the published version, also available here: http://dx.doi.org/10.1103/PhysRevB.79.235323. Time-dependent electron-transport processes are often studied in the wide-band limit. In this paper, a generalized Floquet approach beyond the wide-band limit is developed for a general treatment of the memory effect on the virtually unexplored multiphoton (MP) coherent destruction of tunneling (CDT) phenomenon in periodically driven electrode-wire-electrode nanoscale systems. As a case study, we apply the approach to a detailed analysis of the electron-transport dc current in the electrode-quantum double dot-electrode system, showing the significance of the memory effect as well as illustrating the origin of the MP-CDT phenomenon.
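
    In the standard Floquet picture that this work generalizes, quasienergies are obtained from the eigenvalues of the one-period propagator, and CDT of a driven two-level (double-dot-like) system shows up as a closing of the quasienergy splitting near zeros of the Bessel function J_0(A/w). The numpy sketch below is a minimal illustration of that textbook calculation with assumed, purely illustrative parameters and simple piecewise-constant time slicing; it does not implement the paper's generalized approach including memory (non-wide-band) effects.

        import numpy as np

        # Isolated driven two-level system (a stand-in for the double dot):
        # H(t) = (Delta/2) * sigma_x + (A/2) * cos(w*t) * sigma_z, parameters illustrative.
        sx = np.array([[0, 1], [1, 0]], dtype=complex)
        sz = np.array([[1, 0], [0, -1]], dtype=complex)

        def quasienergies(Delta, A, w, steps=4000):
            """Quasienergies from the one-period propagator U(T), built by
            piecewise-constant time slicing of H(t) over one driving period."""
            T = 2 * np.pi / w
            dt = T / steps
            U = np.eye(2, dtype=complex)
            for k in range(steps):
                t = (k + 0.5) * dt
                Ht = 0.5 * Delta * sx + 0.5 * A * np.cos(w * t) * sz
                vals, vecs = np.linalg.eigh(Ht)
                U = vecs @ np.diag(np.exp(-1j * vals * dt)) @ vecs.conj().T @ U
            return -np.angle(np.linalg.eigvals(U)) / T   # defined modulo w

        # Near the first zero of J_0 (A/w ~ 2.4048) the splitting nearly closes: CDT.
        Delta, w = 0.1, 2.0
        print(quasienergies(Delta, 2.4048 * w, w))
        print(quasienergies(Delta, 1.0 * w, w))   # away from the zero: visible splitting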
